56 research outputs found

    Delayed theory combination vs. Nelson-Oppen for satisfiability modulo theories: a comparative analysis

    Most state-of-the-art approaches for Satisfiability Modulo Theories (SMT(T)) rely on the integration between a SAT solver and a decision procedure for sets of literals in the background theory T (T-solver). Often T is the combination T1 ∪ T2 of two (or more) simpler theories (SMT(T1 ∪ T2)), s.t. the specific Ti-solvers must be combined. Up to a few years ago, the standard approach to SMT(T1 ∪ T2) was to integrate the SAT solver with one combined (T1 ∪ T2)-solver, obtained from two distinct Ti-solvers by means of evolutions of Nelson and Oppen's (NO) combination procedure, in which the Ti-solvers deduce and exchange interface equalities. Nowadays many state-of-the-art SMT solvers use evolutions of a more recent SMT(T1 ∪ T2) procedure called Delayed Theory Combination (DTC), in which each Ti-solver interacts directly and only with the SAT solver, so that part or all of the (possibly very expensive) reasoning effort on interface equalities is delegated to the SAT solver itself. In this paper we present a comparative analysis of DTC vs. NO for SMT(T1 ∪ T2). On the one hand, we explain the advantages of DTC in exploiting the power of modern SAT solvers to reduce the search. On the other hand, we show that the extra amount of Boolean search required of the SAT solver can be controlled. In fact, we prove two novel theoretical results, for both convex and non-convex theories and for different deduction capabilities of the Ti-solvers, which relate the amount of extra Boolean search required of the SAT solver by DTC with the number of deductions and case-splits required of the Ti-solvers by NO in order to perform the same tasks: (i) under the same hypotheses of deduction capabilities of the Ti-solvers required by NO, DTC causes no extra Boolean search; (ii) using Ti-solvers with limited or no deduction capabilities, the extra Boolean search can be reduced to a negligible amount by controlling the quality of the T-conflict sets returned by the Ti-solvers.
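    To make the NO/DTC contrast above concrete, the Python fragment below sketches a DTC-style loop in which the SAT search also branches on the interface equalities and each Ti-solver is queried in isolation. It is only an illustrative sketch, not the procedure evaluated in the paper: the naive truth-table enumeration stands in for a real CDCL SAT solver, and names such as interface_equalities, dtc, and the solver callables are hypothetical.

    # Illustrative sketch only: the SAT search is a naive enumeration and the
    # two theory solvers are opaque callables; all names are hypothetical.
    from itertools import combinations, product

    def interface_equalities(shared_vars):
        # One Boolean atom per equality between interface variables.
        return [frozenset(p) for p in combinations(sorted(shared_vars), 2)]

    def dtc(clauses, atoms, theory_solvers, shared_vars):
        # Delayed Theory Combination: the SAT search also assigns the interface
        # equalities, and each Ti-solver talks directly and only to it.
        all_atoms = sorted(set(atoms) | set(interface_equalities(shared_vars)), key=str)
        learned = []                                   # conflict clauses learned so far
        for values in product([True, False], repeat=len(all_atoms)):
            model = dict(zip(all_atoms, values))
            # Boolean check against the original and the learned clauses.
            if not all(any(model[a] == sign for a, sign in clause)
                       for clause in clauses + learned):
                continue
            # Theory check: each Ti-solver sees only the literals it understands
            # (its own atoms plus interface (dis)equalities) and returns either
            # None or a Ti-conflict set of atoms.
            conflicts = [c for solve in theory_solvers if (c := solve(model)) is not None]
            if not conflicts:
                return "sat", model
            # Smaller conflict sets prune more of the extra Boolean search,
            # which is the effect quantified by the paper's results.
            learned += [[(a, not model[a]) for a in conf] for conf in conflicts]
        return "unsat", None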

    Accuracy of a CGM Sensor in Pediatric Subjects With Type 1 Diabetes. Comparison of Three Insertion Sites: Arm, Abdomen, and Gluteus

    Patients with diabetes, especially pediatric ones, sometimes wear continuous glucose monitoring (CGM) sensors in positions other than the approved ones. Here we compare the accuracy of the Dexcom® G5 CGM sensor at three different sites: abdomen, gluteus (both approved), and arm (off-label).

    Resolution proof transformation for compression and interpolation

    Verification methods based on SAT, SMT, and theorem proving often rely on proofs of unsatisfiability as a powerful tool to extract information in order to reduce the overall effort. For example, a proof may be traversed to identify a minimal reason that led to unsatisfiability, to compute abstractions, or to derive Craig interpolants. In this paper we focus on two important aspects of the efficient handling of proofs of unsatisfiability: compression and manipulation. First, since the proof size can in general be very large (exponential in the size of the input problem), it is beneficial to adopt techniques to compress it for further processing. Second, proofs can be manipulated as a flexible preprocessing step in preparation for interpolant computation. Both techniques are implemented in a framework that uses local rewriting rules to transform the proofs. We show that a careful use of the rules, combined with existing algorithms, can result in an effective simplification of the original proofs. We have evaluated several heuristics on a wide range of unsatisfiable problems derived from SAT and SMT test cases.
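    As an illustration of what a local rewriting rule on resolution proofs can look like, the sketch below encodes proof nodes as a small Python dataclass and applies one classic simplification: if, after upstream transformations, the pivot no longer occurs in an antecedent, the resolution step is redundant and that antecedent can replace the node. This is a generic textbook-style rule, not necessarily one of the specific rules of the framework described above; the class and function names are hypothetical.

    # Minimal resolution-proof node plus one local simplification rule
    # (hypothetical names; literals are signed integers, e.g. {1, -2}).
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ProofNode:
        clause: frozenset                        # set of literals
        pivot: Optional[int] = None              # resolved variable, None for input clauses
        left: Optional["ProofNode"] = None       # antecedent containing +pivot
        right: Optional["ProofNode"] = None      # antecedent containing -pivot

    def resolve(left, right, pivot):
        # Standard propositional resolution on `pivot`.
        clause = (left.clause - {pivot}) | (right.clause - {-pivot})
        return ProofNode(clause, pivot, left, right)

    def simplify(node):
        # Bottom-up pass: drop resolution steps whose pivot disappeared upstream.
        if node.pivot is None:                   # leaf: an input clause
            return node
        left, right = simplify(node.left), simplify(node.right)
        if node.pivot not in left.clause:        # left already subsumes the resolvent
            return left
        if -node.pivot not in right.clause:      # right already subsumes the resolvent
            return right
        return resolve(left, right, node.pivot)  # re-resolve on the simplified premises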

    An extension of lazy abstraction with interpolation for programs with arrays

    Lazy abstraction with interpolation-based refinement has been shown to be a powerful technique for verifying imperative programs. In the presence of arrays, however, the method suffers from an intrinsic limitation, due to the fact that the invariants needed for verification usually contain universally quantified variables, which are not present in program specifications. In this work we present an extension of the interpolation-based lazy abstraction framework in which arrays of unknown length can be handled in a natural manner. In particular, we exploit the Model Checking Modulo Theories framework to derive a backward reachability version of lazy abstraction that supports reasoning about arrays. The new approach has been implemented in a tool, called safari, which has been validated on a wide range of benchmarks. We show by means of experiments that our approach can synthesize and prove universally quantified properties over arrays in a completely automatic fashion.
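    The following fragment sketches the kind of backward reachability loop mentioned above, with the symbolic operations (preimage computation, satisfiability and entailment checks over array formulas) passed in as callables. It is a schematic illustration under assumed interfaces, not the safari implementation, and it omits the interpolation-based abstraction and refinement that the real procedure relies on.

    # Schematic backward reachability (hypothetical interfaces, not safari).
    def backward_reachability(init, transitions, unsafe, pre, is_sat_with, entails_some):
        # pre(t, phi)        : symbolic preimage of formula phi under transition t
        # is_sat_with(a, b)  : SMT check for satisfiability of a AND b
        # entails_some(p, V) : True if p is subsumed by some formula already in V
        frontier = [unsafe]            # formulas still to be expanded
        visited = []                   # formulas already covered (fixpoint candidates)
        while frontier:
            phi = frontier.pop()
            if is_sat_with(phi, init):
                return "unsafe"        # a backward path reaches the initial states
            if entails_some(phi, visited):
                continue               # already covered: nothing new to explore
            visited.append(phi)
            # Expand one preimage per transition relation.
            frontier.extend(pre(t, phi) for t in transitions)
        return "safe"                  # fixpoint reached without touching init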

    Quantifier-Free Interpolation of a Theory of Arrays

    The use of interpolants in model checking is becoming an enabling technology to allow fast and robust verification of hardware and software. The application of encodings based on the theory of arrays, however, is limited by the impossibility of deriving quantifier-free interpolants in general. In this paper, we show that it is possible to obtain quantifier-free interpolants for a Skolemized version of the extensional theory of arrays. We prove this in two ways: (1) non-constructively, by using the model-theoretic notion of amalgamation, which is known to be equivalent to admitting quantifier-free interpolation for universal theories; and (2) constructively, by designing an interpolating procedure based on solving equations between array updates. (Interestingly, rewriting techniques are used in the key steps of the solver and its proof of correctness.) To the best of our knowledge, this is the first successful attempt at computing quantifier-free interpolants for a variant of the theory of arrays with extensionality.
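    A small example, constructed here for illustration and not taken from the paper, shows what a quantifier-free interpolant over array reads (rd) and writes (wr) can look like:

    % Illustrative only: A and B are jointly unsatisfiable because writing at
    % index i cannot change the value read at an index j different from i.
    \begin{align*}
    A &:\; x = \mathit{wr}(y, i, e)\\
    B &:\; i \neq j \;\wedge\; \mathit{rd}(x, j) \neq \mathit{rd}(y, j)\\
    I &:\; x = \mathit{wr}(y, i, \mathit{rd}(x, i))
    \end{align*}

    Here A entails I (under A, rd(x, i) = e), I mentions only the symbols shared by A and B (namely x, y, i), and I together with B is unsatisfiable because i ≠ j forces rd(x, j) = rd(y, j). No quantifiers are needed in I; the paper shows how such interpolants can be obtained systematically by solving equations between array updates.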

    Association of kidney disease measures with risk of renal function worsening in patients with type 1 diabetes

    Background: Albuminuria has classically been considered a marker of kidney damage progression in diabetic patients, and it is routinely assessed to monitor kidney function. However, the role of a mild GFR reduction in the development of stage ≥3 CKD has been less explored in type 1 diabetes mellitus (T1DM) patients. The aim of the present study was to evaluate the prognostic role of kidney disease measures, namely albuminuria and reduced GFR, on the development of stage ≥3 CKD in a large cohort of patients affected by T1DM. Methods: A total of 4284 patients affected by T1DM followed up at 76 diabetes centers participating in the Italian Association of Clinical Diabetologists (Associazione Medici Diabetologi, AMD) initiative constitute the study population. Urinary albumin excretion (ACR) and estimated GFR (eGFR) were retrieved and analyzed. The incidence of stage ≥3 CKD (eGFR < 60 mL/min/1.73 m2) or eGFR reduction > 30% from baseline was evaluated. Results: The mean estimated GFR was 98 ± 17 mL/min/1.73 m2 and the proportion of patients with albuminuria was 15.3% (n = 654) at baseline. About 8% (n = 337) of patients developed one of the two renal endpoints during the 4-year follow-up period. Age, albuminuria (micro or macro) and baseline eGFR < 90 mL/min/1.73 m2 were independent risk factors for stage ≥3 CKD and renal function worsening. When compared to patients with eGFR > 90 mL/min/1.73 m2 and normoalbuminuria, those with albuminuria at baseline had a 1.69-fold greater risk of reaching stage 3 CKD, while patients with mild eGFR reduction (i.e. eGFR between 90 and 60 mL/min/1.73 m2) showed a 3.81-fold greater risk, which rose to 8.24-fold for patients with both albuminuria and mild eGFR reduction at baseline. Conclusions: Albuminuria and eGFR reduction represent independent risk factors for incident stage ≥3 CKD in T1DM patients. The simultaneous occurrence of reduced eGFR and albuminuria has a synergistic effect on renal function worsening.
